Dynamic Programming
Dynamic Programming (DP) is a technique used to optimize recursive algorithms by storing intermediate results, avoiding redundant calculations, and thus significantly improving computational efficiency. It is particularly useful for problems that exhibit overlapping subproblems and optimal substructure.
- Overlapping Subproblems: Problems can be broken down into smaller, overlapping subproblems that are solved multiple times.
- Optimal Substructure: An optimal solution can be constructed efficiently from optimal solutions to its subproblems.
Key Concepts
- Memoization: A top-down approach where results of subproblems are stored (cached) for future use to avoid redundant calculations. It is implemented with recursion and caching.
- Tabulation: A bottom-up approach that solves subproblems iteratively, starting from the smallest subproblems and building up solutions to the larger problem.
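To make the two approaches concrete, here is a small illustrative sketch (not from the original text) applying both to the Fibonacci sequence:

```python
from functools import lru_cache

# Top-down (memoization): plain recursion, with results cached
# so each subproblem is computed only once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): solve the smallest subproblems first
# and build up iteratively in a table.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Both run in O(n) time; without caching, the naive recursion would take exponential time because the same subproblems are recomputed repeatedly.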
Advantages of Dynamic Programming
- Efficiency: Reduces the time complexity significantly in many cases.
- Simplicity: Provides a structured approach to solving complex problems.
- Generalization: Solutions can be adapted to solve a range of similar problems.
Disadvantages of Dynamic Programming
- Memory Usage: High memory consumption due to storage of all subproblem solutions.
- Complexity: Implementing a dynamic programming solution can be complex and requires a deep understanding of the problem.
Optimal Substructure
Problem Description
- Problem: Minimum Cost Climbing Stairs
- You can climb 1 or 2 steps at a time.
- Each step has a cost, represented by a non-negative integer.
- Starting from the ground, determine the minimum cost to reach the top of the stairs.
- Cost Example: For a 3-step staircase with costs 1, 10, and 1, the minimum cost to reach the 3rd step is 2: step on the 1st step (cost 1), skip the 2nd, and step on the 3rd (cost 1).
Dynamic Programming Approach
- Let dp[i] be the minimum cumulative cost to reach the i-th step.
- The cost to reach the i-th step is the minimum of the costs of reaching the two previous steps, plus the current step's cost: dp[i] = min(dp[i-1], dp[i-2]) + cost[i].
- Optimal Substructure Property: The optimal solution to a problem can be constructed from optimal solutions of its subproblems.
State Transition and Initial States
- Transition Equation: dp[i] = min(dp[i-1], dp[i-2]) + cost[i].
- Initial States: dp[1] = cost[1], dp[2] = cost[2].
Code for Dynamic Programming
```python
def min_cost_climbing_stairs_dp(cost):
    # Convention: cost[0] = 0 is the ground; cost[1..n] are the step costs
    n = len(cost) - 1
    if n == 1 or n == 2:
        return cost[n]
    # dp[i]: minimum cumulative cost to reach the i-th step
    dp = [0] * (n + 1)
    dp[1], dp[2] = cost[1], cost[2]
    for i in range(3, n + 1):
        dp[i] = min(dp[i - 1], dp[i - 2]) + cost[i]
    return dp[n]
```
Space Optimization
- Since dp[i] depends only on dp[i-1] and dp[i-2], the space used by the DP array can be compressed from O(n) to O(1) by keeping just two rolling variables:
```python
def min_cost_climbing_stairs_dp_comp(cost):
    # Same convention: cost[0] = 0 is the ground; cost[1..n] are the step costs
    n = len(cost) - 1
    if n == 1 or n == 2:
        return cost[n]
    prev, curr = cost[1], cost[2]  # play the roles of dp[i-2] and dp[i-1]
    for i in range(3, n + 1):
        prev, curr = curr, min(prev, curr) + cost[i]
    return curr
```
Statelessness
Definition
- Statelessness: A characteristic where the future development of a state depends only on the current state and is independent of how that state was reached.
Problems Illustrating Non-Statelessness
Stair Climbing with Constraints
- Constraint: Cannot jump 1 step twice consecutively.
- DP State Definition Expansion:
- State [i, j] indicates standing on the i-th step, with the last jump having covered j steps (j ∈ {1, 2}).
- Transition Equation: dp[i, 1] = dp[i-1, 2] (a 1-step jump may only follow a 2-step jump), and dp[i, 2] = dp[i-2, 1] + dp[i-2, 2] (a 2-step jump may follow a jump of either size).
- This problem illustrates that by expanding the state definition, we can still manage problems that initially seem to lack statelessness.
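Assuming the goal is to count the number of ways to reach the top (as in the classic unconstrained stair-climbing problem), the expanded state table can be sketched as follows; the function name is illustrative:

```python
def climbing_stairs_constraint_dp(n):
    """Count the ways to reach step n when a 1-step jump
    cannot immediately follow another 1-step jump."""
    if n == 1 or n == 2:
        return 1
    # dp[i][j]: ways to stand on step i with the last jump of j steps
    # (index 0 is unused so that j can be 1 or 2 directly)
    dp = [[0, 0, 0] for _ in range(n + 1)]
    dp[1][1], dp[1][2] = 1, 0
    dp[2][1], dp[2][2] = 0, 1
    for i in range(3, n + 1):
        dp[i][1] = dp[i - 1][2]                  # 1-jump must follow a 2-jump
        dp[i][2] = dp[i - 2][1] + dp[i - 2][2]   # 2-jump can follow either
    return dp[n][1] + dp[n][2]
```

For n = 3 this returns 2 (the sequences 1+2 and 2+1; 1+1+1 is forbidden).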
Special Case: Stair Climbing with Obstacle Generation
- Problem Description: After climbing to step i, the system places an obstacle on step 2i, which can no longer be stepped on.
- Complexity: Future steps depend on all past jumps, demonstrating that some problems inherently defy statelessness, making dynamic programming a less viable solution. Alternative methods like heuristic search or genetic algorithms might be used for such problems.
Classic Examples of DP Problems
- Fibonacci Sequence: Calculating the nth Fibonacci number.
- Knapsack Problem: Choosing the most valuable items within a weight limit.
- Longest Common Subsequence: Finding the longest sequence shared by two strings.
- Coin Change Problem: Determining the minimum number of coins needed to make a specific amount.
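As one concrete instance from this list, a bottom-up sketch of the coin change problem might look like the following (the function name and the -1 "impossible" convention are illustrative choices, not from the original text):

```python
def coin_change(coins, amount):
    # min_coins[a]: fewest coins needed to make amount a
    INF = float("inf")
    min_coins = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            # Using coin c, amount a is reachable from amount a - c
            if c <= a and min_coins[a - c] + 1 < min_coins[a]:
                min_coins[a] = min_coins[a - c] + 1
    return min_coins[amount] if min_coins[amount] != INF else -1
```

For example, with coins {1, 2, 5} and amount 11, the answer is 3 (5 + 5 + 1). Note the same DP ingredients as before: overlapping subproblems (each smaller amount) and optimal substructure (an optimal solution for amount a extends an optimal solution for a - c).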